Human-Aware Reinforcement Learning for Fault Recovery Using Contextual Gaussian Processes

Authors

Abstract

This work addresses the iterated nonstationary assistant selection problem, in which, over the course of repeated interactions on a mission, an autonomous robot experiencing a fault must select a single human from among a group of assistants to restore its operation. The humans in our problem have performance levels that change as a function of their experience solving the problem. Our approach uses reinforcement learning via a multi-armed bandit formulation to learn about the capabilities of each potential assistant and decide whom to task. The study, which builds on past work, evaluates a Gaussian-process-based machine learning method to effectively model the complex dynamics associated with forgetting. Application in simulation shows the method is capable of tracking human-like performance. Using a novel policy called the proficiency window, we show the technique can outperform baseline strategies while providing guarantees on use. This offers an effective alternative to dedicated supervisors, with application to any human–robot system where a set of humans is responsible for overseeing operations.
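To make the bandit formulation described above concrete, the sketch below shows one plausible way to select among assistants whose performance drifts with experience: each assistant (arm) gets a Gaussian-process model of reward as a function of experience level, and an upper-confidence-bound rule picks whom to task. This is an illustrative sketch under assumed names (`GPAssistantBandit`, `rbf`, the kernel parameters), not the paper's actual implementation or its proficiency-window policy.

```python
import numpy as np

def rbf(x1, x2, length=3.0, var=1.0):
    """Squared-exponential kernel over the experience context (assumed form)."""
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

class GPAssistantBandit:
    """Hypothetical GP bandit: one GP per human assistant, indexed by
    how many times that assistant has been tasked (their 'experience')."""

    def __init__(self, n_assistants, noise=0.1):
        self.noise = noise
        # Per-assistant history: experience index -> observed reward.
        self.X = [[] for _ in range(n_assistants)]
        self.y = [[] for _ in range(n_assistants)]

    def posterior(self, arm, x_query):
        """Closed-form GP posterior mean/variance at an experience level."""
        X = np.array(self.X[arm], dtype=float)
        if X.size == 0:            # no data yet: prior mean 0, variance 1
            return 0.0, 1.0
        y = np.array(self.y[arm], dtype=float)
        K = rbf(X, X) + self.noise ** 2 * np.eye(len(X))
        k = rbf(X, np.array([x_query], dtype=float))[:, 0]
        mu = k @ np.linalg.solve(K, y)
        var = 1.0 - k @ np.linalg.solve(K, k)
        return mu, max(var, 1e-9)

    def select(self, beta=2.0):
        """UCB rule: task the assistant with the highest optimistic
        estimate at their *next* experience level."""
        scores = []
        for arm in range(len(self.X)):
            x_next = len(self.X[arm])   # experience = attempts so far
            mu, var = self.posterior(arm, x_next)
            scores.append(mu + beta * np.sqrt(var))
        return int(np.argmax(scores))

    def update(self, arm, reward):
        """Record the observed recovery outcome for the tasked assistant."""
        self.X[arm].append(len(self.X[arm]))
        self.y[arm].append(reward)
```

Because the GP is indexed by experience rather than treating rewards as i.i.d., the posterior can follow a drifting performance curve, which is the property a nonstationary assistant-selection scheme needs.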


Similar Articles

Reinforcement Learning Using Gaussian Processes for Discretely Controlled Continuous Processes

In many application domains such as autonomous avionics, power electronics, and process systems engineering, there exist discretely controlled continuous processes (DCCPs), which constitute a special subclass of hybrid dynamical systems. We introduce a novel simulation-based approach for DCCP optimization under uncertainty using Reinforcement Learning with Gaussian Process models to learn the tra...


Gaussian Processes in Reinforcement Learning

We exploit some useful properties of Gaussian process (GP) regression models for reinforcement learning in continuous state spaces and discrete time. We demonstrate how the GP model allows evaluation of the value function in closed form. The resulting policy iteration algorithm is demonstrated on a simple problem with a two-dimensional state space. Further, we speculate that the intrinsic abili...
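The closed-form evaluation mentioned in that abstract rests on the standard GP regression identities: with kernel matrix K, noise level s, and test covariances k*, the posterior mean is k*ᵀ(K + s²I)⁻¹y and the variance is k(x*,x*) − k*ᵀ(K + s²I)⁻¹k*. A minimal illustrative sketch of those identities (not that paper's policy-iteration algorithm; the function names and kernel are assumptions):

```python
import numpy as np

def rbf_kernel(A, B, length=1.0):
    """Squared-exponential kernel between two 1-D input sets."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / length ** 2)

def gp_posterior(X, y, x_star, noise=0.1):
    """Closed-form GP regression posterior at a single test input."""
    K = rbf_kernel(X, X) + noise ** 2 * np.eye(len(X))
    k_star = rbf_kernel(X, np.array([x_star]))[:, 0]
    mean = k_star @ np.linalg.solve(K, y)          # k*^T (K + s^2 I)^-1 y
    var = 1.0 - k_star @ np.linalg.solve(K, k_star)  # prior var 1 here
    return mean, var
```

Everything reduces to linear algebra on the kernel matrix, which is what makes GP value-function evaluation tractable in closed form.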


Sample Efficient Reinforcement Learning with Gaussian Processes

This paper derives sample complexity results for using Gaussian Processes (GPs) in both model-based and model-free reinforcement learning (RL). We show that GPs are KWIK learnable, proving for the first time that a model-based RL approach using GPs, GP-Rmax, is sample efficient (PAC-MDP). However, we then show that previous approaches to model-free RL using GPs take an exponential number of step...


Nonlinear Inverse Reinforcement Learning with Gaussian Processes

We present a probabilistic algorithm for nonlinear inverse reinforcement learning. The goal of inverse reinforcement learning is to learn the reward function in a Markov decision process from expert demonstrations. While most prior inverse reinforcement learning algorithms represent the reward as a linear combination of a set of features, we use Gaussian processes to learn the reward as a nonli...



Journal

Journal title: Journal of Aerospace Information Systems

Year: 2021

ISSN: 1940-3151, 2327-3097

DOI: https://doi.org/10.2514/1.i010921